Text summarization models are usually trained to produce summaries that meet human quality requirements. However, existing evaluation metrics for summarization are only rough proxies of summary quality: they correlate poorly with human ratings and suppress summary diversity. To address these problems, we propose SummScore, a comprehensive metric for summary quality evaluation based on CrossCoder. First, by adopting a measurement mode that compares a summary against the semantics of the original text, SummScore gets rid of the suppression of summary diversity. With the help of a cross-encoder pre-trained for text matching, SummScore can effectively capture subtle semantic differences between summaries. Second, to improve comprehensiveness and interpretability, SummScore consists of four fine-grained sub-models that measure coherence, consistency, fluency, and relevance, respectively. We use semi-supervised multi-round training to improve the model's performance on extremely limited annotated data. Extensive experiments show that SummScore significantly outperforms existing evaluation metrics in correlation with human ratings across the above four dimensions. We also provide SummScore quality evaluation results for 16 mainstream summarization models to support future research.
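As a rough illustration of the cross-encoder scoring described above, the sketch below scores a (source, summary) pair along the four dimensions; the checkpoint name and the one-model-per-dimension layout are assumptions made for the example, not the released SummScore implementation.

```python
# Minimal sketch (not the official SummScore code): score a summary against its
# source along four quality dimensions with one cross-encoder per dimension.
from sentence_transformers import CrossEncoder  # pip install sentence-transformers

# Hypothetical per-dimension checkpoints; a generic text-matching model is used
# here as a placeholder for fine-tuned sub-models.
DIMENSIONS = ["coherence", "consistency", "fluency", "relevance"]
scorers = {dim: CrossEncoder("cross-encoder/stsb-roberta-base") for dim in DIMENSIONS}

def summ_score(source: str, summary: str) -> dict:
    """Return one score per dimension by jointly encoding (source, summary)."""
    return {dim: float(model.predict([(source, summary)])[0])
            for dim, model in scorers.items()}

print(summ_score("The original document text ...", "A candidate summary ..."))
```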
Document retrieval enables users to find the documents they need accurately and quickly. To meet retrieval-efficiency requirements, prevalent deep neural methods adopt a representation-based matching paradigm, which saves online matching time by pre-storing document representations offline. However, this paradigm consumes vast local storage space, especially when documents are stored as word-grained representations. To address this problem, we propose TGTR, a topic-grained text representation model for document retrieval. Following the representation-based matching paradigm, TGTR stores document representations offline to ensure retrieval efficiency, while greatly reducing storage requirements by using novel topic-grained representations instead of traditional word-grained ones. Experimental results show that, compared with word-grained baselines, TGTR is consistently competitive on TREC CAR and MS MARCO in terms of retrieval accuracy, while requiring less than 1/10 of their storage space. Moreover, TGTR overwhelmingly surpasses global-grained baselines in retrieval accuracy.
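A minimal sketch of the topic-grained storage idea, assuming topic vectors are obtained by clustering token embeddings; this illustrates the storage trade-off only and is not the TGTR model itself.

```python
# Illustrative sketch: collapse per-token vectors into a handful of "topic"
# vectors via clustering, shrinking what must be stored per document.
import numpy as np
from sklearn.cluster import KMeans

def topic_grained(token_embeddings: np.ndarray, n_topics: int = 8) -> np.ndarray:
    """Compress (num_tokens, dim) token vectors into (n_topics, dim) topic vectors."""
    n_topics = min(n_topics, len(token_embeddings))
    km = KMeans(n_clusters=n_topics, n_init=10).fit(token_embeddings)
    return km.cluster_centers_

doc = np.random.rand(300, 768)   # e.g. 300 token embeddings for one document
topics = topic_grained(doc)      # 8 vectors instead of 300
print(doc.shape, "->", topics.shape)
```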
In this paper, we advance the performance of facial expression recognition (FER) by exploiting omni-supervised learning. Current state-of-the-art approaches usually aim to recognize facial expressions in controlled environments by training models on a limited number of samples. To enhance the robustness of the learned models across diverse scenarios, we propose to perform omni-supervised learning by exploiting labeled samples together with a large amount of unlabeled data. Specifically, we first use MS-Celeb-1M as a face pool, which contains about 5,822K unlabeled face images. Then, a primitive model trained on a small number of labeled samples is employed to select high-confidence samples from this pool through feature-based similarity comparison. We find that the new dataset constructed in this omni-supervised manner significantly improves the generalization ability of the learned FER model and thereby boosts performance. However, as more training samples are used, more computational resources and training time are required, which is often unaffordable in practice. To relieve the demand for computational resources, we further adopt a dataset distillation strategy to distill the task-relevant knowledge from the newly mined samples and compress it into a very small set of images. This distilled dataset can boost FER performance with little additional computational cost. We perform extensive experiments on five popular benchmarks and a newly constructed dataset, where consistent gains are achieved under various settings with the proposed framework. We hope this work serves as a solid baseline and helps ease future research in FER.
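The high-confidence selection step could look roughly like the sketch below, assuming class prototypes come from the primitive model's labeled features and similarity is cosine; the threshold and shapes are illustrative, not the authors' settings.

```python
# Sketch of feature-based similarity selection: keep unlabeled faces whose
# features are close to some labeled-class prototype (illustrative only).
import numpy as np

def select_confident(unlabeled_feats, class_prototypes, threshold=0.8):
    """Return (indices, pseudo_labels) for unlabeled samples whose best cosine
    similarity to a class prototype exceeds the confidence threshold."""
    u = unlabeled_feats / np.linalg.norm(unlabeled_feats, axis=1, keepdims=True)
    p = class_prototypes / np.linalg.norm(class_prototypes, axis=1, keepdims=True)
    sims = u @ p.T                          # (num_unlabeled, num_classes)
    best = sims.max(axis=1)
    keep = np.where(best >= threshold)[0]
    return keep, sims[keep].argmax(axis=1)

idx, pseudo = select_confident(np.random.rand(1000, 512), np.random.rand(7, 512))
print(len(idx), "samples selected")
```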
As the number of distributed services (or microservices) of cloud-native applications grows, resource management becomes a challenging task. These applications tend to be user-facing and latency-sensitive, and our goal is to continuously minimize the amount of CPU resources allocated while still satisfying the application latency SLO. Although previous efforts have proposed simple heuristics and sophisticated ML-based techniques, we believe that a practical resource manager should accurately scale CPU resources for diverse applications, with minimal human effort and operational overhead. To this end, we ask: can we systematically break resource management down into subproblems solvable by practical policies? Based on the notion of a CPU-throttle-based performance target, we decouple the mechanisms of SLO feedback and resource control, and implement a two-level framework -- Autothrottle. It combines a lightweight learned controller at the global level and agile per-microservice controllers at the local level. We evaluate Autothrottle on three microservice applications, with both short-term and 21-day production workload traces. Empirical results show Autothrottle's superior CPU core savings of up to 26.21% over the best-performing baselines across applications, while maintaining the latency SLO.
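A toy sketch of what a throttle-driven local controller might look like under the decoupling described above; the target-throttle signal, step size, and limits are illustrative assumptions, not Autothrottle's actual policy.

```python
# Toy local controller: nudge a service's CPU limit so its observed throttle
# ratio tracks a performance target set by a global controller (illustrative).
def adjust_cpu_limit(current_limit: float, observed_throttle: float,
                     target_throttle: float, step: float = 0.1,
                     min_limit: float = 0.1) -> float:
    """Grant more CPU when the service is throttled above target; reclaim
    cores when it is throttled below target."""
    if observed_throttle > target_throttle:
        return current_limit + step               # SLO at risk: add CPU
    return max(min_limit, current_limit - step)   # headroom: save cores

limit = 2.0
for throttle in [0.30, 0.25, 0.05, 0.02]:         # fraction of periods throttled
    limit = adjust_cpu_limit(limit, throttle, target_throttle=0.20)
    print(round(limit, 2))
```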
Decision-making by artificial neural networks with minimal latency is paramount for many applications such as navigation, tracking, and real-time machine-action systems. This requires machine learning hardware to process multidimensional data at high throughput. Unfortunately, convolution operations, the primary computational tool for data classification tasks, obey challenging run-time complexity scaling laws. However, implementing the convolution theorem in a Fourier-optic display-light processor enables a non-iterative O(1) run-time complexity for data inputs beyond 1,000×1,000 matrices. Following this approach, here we demonstrate data-streaming multi-kernel image batch processing with a Fourier convolutional neural network (FCNN) accelerator. We show image batch processing of large-scale matrices as 20 million passive dot-product multiplications performed by digital light-processing modules in the Fourier domain. In addition, we further parallelize this optical FCNN system by exploiting multiple spatio-temporal diffraction orders, achieving a 98-fold throughput improvement over state-of-the-art FCNN accelerators. A comprehensive discussion of the practical challenges of operating at the edge of the system's capabilities highlights crosstalk in the Fourier domain and resolution scaling laws. Accelerating convolutions by exploiting the massive parallelism of display technology opens up non-von Neumann machine learning acceleration.
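The underlying convolution theorem can be checked numerically in software, as in the sketch below; this is a digital analogue of the mathematical principle only, not a model of the optical hardware.

```python
# Numerical illustration of the convolution theorem the accelerator exploits:
# a spatial convolution equals an element-wise product in the Fourier domain.
import numpy as np
from scipy.signal import fftconvolve, convolve2d

image = np.random.rand(256, 256)
kernel = np.random.rand(5, 5)

direct = convolve2d(image, kernel, mode="same")
fourier = fftconvolve(image, kernel, mode="same")  # FFT -> multiply -> inverse FFT

print(np.allclose(direct, fourier))   # True: both paths give the same result
```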
Weakly-supervised object localization aims to indicate the category as well as the scope of an object in an image given only image-level labels. Most existing works are based on Class Activation Mapping (CAM) and endeavor to enlarge the discriminative area inside the activation map to perceive the whole object, yet ignore the co-occurrence confounder of object and context (e.g., fish and water), which makes it hard for the model to distinguish object boundaries. Besides, the use of CAM also brings a dilemma: classification and localization always suffer from a performance gap and cannot reach their highest accuracy simultaneously. In this paper, we propose a causal knowledge distillation method, dubbed KD-CI-CAM, to address these two under-explored issues in one go. More specifically, we tackle the co-occurrence context confounder problem via causal intervention (CI), which explores the causalities among image features, contexts, and categories to eliminate the biased object-context entanglement in the class activation maps. Based on the de-biased object feature, we additionally propose a multi-teacher causal distillation framework to balance the absorption of classification knowledge and localization knowledge during model training. Extensive experiments on several benchmarks demonstrate the effectiveness of KD-CI-CAM in learning clear object boundaries from confounding contexts and addressing the dilemma between classification and localization performance.
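For reference, a minimal sketch of the vanilla CAM that KD-CI-CAM starts from; the causal intervention and multi-teacher distillation themselves are not reproduced here.

```python
# Minimal Class Activation Mapping (CAM) sketch: weight the last conv feature
# maps by the classifier weights of one class (illustrative baseline only).
import torch

def class_activation_map(features: torch.Tensor, fc_weight: torch.Tensor,
                         class_idx: int) -> torch.Tensor:
    """features: (C, H, W) maps from the last conv layer;
    fc_weight: (num_classes, C) weights of the classification layer."""
    cam = torch.einsum("c,chw->hw", fc_weight[class_idx], features)
    cam = torch.relu(cam)
    return cam / (cam.max() + 1e-8)   # normalize to [0, 1]

cam = class_activation_map(torch.rand(512, 14, 14), torch.rand(1000, 512), class_idx=3)
print(cam.shape)   # torch.Size([14, 14])
```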
Knowledge graph embedding (KGE), which maps entities and relations in a knowledge graph into continuous vector spaces, has achieved great success in predicting missing links in knowledge graphs. However, knowledge graphs often contain incomplete triples that are difficult for KGEs to infer inductively. To address this challenge, we resort to analogical inference and propose a novel and general self-supervised framework, AnKGE, to enhance KGE models with analogical inference capability. We propose an analogical object retriever that retrieves appropriate analogical objects at the entity, relation, and triple levels. In AnKGE, we train an analogy function for each level of analogical inference, which takes the original element embedding from a well-trained KGE model as input and outputs the analogical object embedding. To combine the inductive inference capability of the original KGE model with the analogical inference capability introduced by AnKGE, we interpolate the analogy score with the base model score and introduce adaptive weights into the score function for prediction. Through extensive experiments on the FB15k-237 and WN18RR datasets, we show that AnKGE achieves competitive results on the link prediction task and performs analogical inference well.
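The score interpolation can be sketched as below, assuming a per-triple adaptive weight produced by a sigmoid; the exact weighting in AnKGE may differ.

```python
# Sketch of blending a base KGE score with an analogy score via an adaptive
# weight (placeholder formulation, not the exact AnKGE score function).
import torch

def interpolated_score(base_score: torch.Tensor, analogy_score: torch.Tensor,
                       weight_logit: torch.Tensor) -> torch.Tensor:
    """Blend the two scores with a learnable per-triple weight in (0, 1)."""
    w = torch.sigmoid(weight_logit)
    return w * analogy_score + (1.0 - w) * base_score

scores = interpolated_score(torch.tensor([12.3, 8.1]),
                            torch.tensor([10.7, 9.5]),
                            torch.tensor([0.0, 1.5]))
print(scores)
```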
When robots learn reward functions using high capacity models that take raw state directly as input, they need to learn both a representation of what matters in the task -- the task "features" -- and how to combine these features into a single objective. If they try to do both at once from input designed to teach the full reward function, it is easy to end up with a representation that contains spurious correlations in the data, which fails to generalize to new settings. Instead, our ultimate goal is to enable robots to identify and isolate the causal features that people actually care about and use when they represent states and behavior. Our idea is that we can tune into this representation by asking users what behaviors they consider similar: behaviors will be similar if the features that matter are similar, even if low-level behavior is different; conversely, behaviors will be different if even one of the features that matter differs. This, in turn, is what enables the robot to disambiguate between what needs to go into the representation versus what is spurious, as well as what aspects of behavior can be compressed together versus not. The notion of learning representations based on similarity has a nice parallel in contrastive learning, a self-supervised representation learning technique that maps visually similar data points to similar embeddings, where similarity is defined by a designer through data augmentation heuristics. By contrast, to learn the representations that people use, and thereby their preferences and objectives, we rely on their definition of similarity. In simulation as well as in a user study, we show that learning through such similarity queries leads to representations that, while far from perfect, are indeed more generalizable than self-supervised and task-input alternatives.
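Below is a hedged sketch of how similarity-query answers could drive representation learning, using a standard contrastive-style pairwise loss as a stand-in for the paper's actual objective.

```python
# Sketch: learn a feature space from "are these two behaviors similar?" answers.
# A contrastive pairwise loss is used as an illustrative stand-in.
import torch
import torch.nn.functional as F

def similarity_query_loss(emb_a: torch.Tensor, emb_b: torch.Tensor,
                          same: torch.Tensor, margin: float = 1.0) -> torch.Tensor:
    """emb_a, emb_b: (batch, dim) embeddings of two behaviors;
    same: (batch,) 1.0 if the user called them similar, else 0.0."""
    dist = F.pairwise_distance(emb_a, emb_b)
    pull = same * dist.pow(2)                          # similar pairs -> close
    push = (1 - same) * F.relu(margin - dist).pow(2)   # different pairs -> apart
    return (pull + push).mean()

loss = similarity_query_loss(torch.randn(4, 16), torch.randn(4, 16),
                             torch.tensor([1.0, 0.0, 1.0, 0.0]))
print(float(loss))
```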
Vision-language models (VLMs) that are pre-trained on large-scale image-text pairs have demonstrated impressive transferability on a wide range of visual tasks. Transferring knowledge from such powerful pre-trained VLMs is emerging as a promising direction for building effective video recognition models. However, the current exploration is still limited. In our opinion, the greatest charm of pre-trained vision-language models is to build a bridge between the visual and textual domains. In this paper, we present a novel framework called BIKE which utilizes the cross-modal bridge to explore bidirectional knowledge: i) We propose a Video Attribute Association mechanism which leverages Video-to-Text knowledge to generate textual auxiliary attributes that complement video recognition. ii) We also present a Temporal Concept Spotting mechanism which uses Text-to-Video expertise to capture temporal saliency in a parameter-free manner, yielding an enhanced video representation. Extensive studies on popular video datasets (i.e., Kinetics-400 & 600, UCF-101, HMDB-51 and ActivityNet) show that our method achieves state-of-the-art performance in most recognition scenarios, e.g., general, zero-shot, and few-shot video recognition. To the best of our knowledge, our best model achieves a state-of-the-art accuracy of 88.4% on the challenging Kinetics-400 with the released CLIP pre-trained model.
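A generic sketch of text-guided temporal pooling in this spirit, with random tensors standing in for CLIP-like frame and text features; the temperature and pooling choice are assumptions, not BIKE's exact mechanism.

```python
# Sketch: weight sampled frames by their similarity to a category text embedding
# (parameter-free saliency), then pool into a single video feature.
import torch
import torch.nn.functional as F

frame_feats = F.normalize(torch.randn(8, 512), dim=-1)   # 8 frame features (placeholder)
text_feat = F.normalize(torch.randn(512), dim=-1)        # class-name text embedding

saliency = torch.softmax(frame_feats @ text_feat / 0.07, dim=0)   # frame weights
video_feat = F.normalize((saliency[:, None] * frame_feats).sum(dim=0), dim=-1)

print(float(video_feat @ text_feat))   # video-category matching score
```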
There is a growing interest in developing unlearnable examples (UEs) against visual privacy leaks on the Internet. UEs are training samples added with invisible but unlearnable noise, which has been found to prevent unauthorized training of machine learning models. UEs are typically generated via a bilevel optimization framework with a surrogate model to remove (minimize) errors from the original samples, and then applied to protect the data against unknown target models. However, existing UE generation methods all rely on an ideal assumption called label-consistency, where the hackers and protectors are assumed to hold the same label for a given sample. In this work, we propose and promote a more practical label-agnostic setting, where the hackers may exploit the protected data quite differently from the protectors. E.g., an m-class unlearnable dataset held by the protector may be exploited by the hacker as an n-class dataset. Existing UE generation methods are rendered ineffective in this challenging setting. To tackle this challenge, we present a novel technique called Unlearnable Clusters (UCs) to generate label-agnostic unlearnable examples with cluster-wise perturbations. Furthermore, we propose to leverage Vision-and-Language Pre-trained Models (VLPMs) like CLIP as the surrogate model to improve the transferability of the crafted UCs to diverse domains. We empirically verify the effectiveness of our proposed approach under a variety of settings with different datasets, target models, and even commercial platforms such as Microsoft Azure and Baidu PaddlePaddle.
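A toy sketch of cluster-wise perturbations under the label-agnostic setting, assuming k-means clusters over surrogate-model features and one shared noise pattern per cluster; this is illustrative only, not the actual UC generation algorithm.

```python
# Toy sketch: group images by surrogate features and add one shared noise
# pattern per cluster, with no reference to labels (illustrative only).
import numpy as np
from sklearn.cluster import KMeans

def unlearnable_clusters(images, surrogate_feats, n_clusters=10, eps=8 / 255):
    """images: (N, H, W, C) in [0, 1]; surrogate_feats: (N, D), e.g. from a VLPM."""
    assignments = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(surrogate_feats)
    noises = np.random.uniform(-eps, eps, (n_clusters,) + images.shape[1:])
    protected = np.clip(images + noises[assignments], 0.0, 1.0)
    return protected, assignments

imgs, feats = np.random.rand(100, 32, 32, 3), np.random.rand(100, 512)
protected, clusters = unlearnable_clusters(imgs, feats)
print(protected.shape, clusters[:10])
```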